Results 1 - 6 of 6
1.
Biomed Phys Eng Express; 10(3), 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38588646

ABSTRACT

Objective. In current radiograph-based intra-fraction markerless target-tracking, digitally reconstructed radiographs (DRRs) from planning CTs (CT-DRRs) are often used to train deep learning models that extract information from the intra-fraction radiographs acquired during treatment. Traditional DRR algorithms were designed for patient alignment (i.e. bone matching) and may not replicate the radiographic image quality of intra-fraction radiographs at treatment. Hypothetically, generating DRRs from pre-treatment Cone-Beam CTs (CBCT-DRRs) with DRR algorithms incorporating physical modelling of on-board imagers (OBIs) could improve the similarity between intra-fraction radiographs and DRRs by eliminating inter-fraction variation and reducing image-quality mismatches between radiographs and DRRs. In this study, we test the two hypotheses that intra-fraction radiographs are more similar to CBCT-DRRs than CT-DRRs, and that intra-fraction radiographs are more similar to DRRs from algorithms incorporating physical models of OBI components than DRRs from algorithms omitting these models. Approach. DRRs were generated from CBCT and CT image sets collected from 20 patients undergoing pancreas stereotactic body radiotherapy. CBCT-DRRs and CT-DRRs were generated replicating the treatment position of patients and the OBI geometry during intra-fraction radiograph acquisition. To investigate whether the modelling of physical OBI components influenced radiograph-DRR similarity, four DRR algorithms were applied for the generation of CBCT-DRRs and CT-DRRs, incorporating and omitting different combinations of OBI component models. The four DRR algorithms were: a traditional DRR algorithm, a DRR algorithm with source-spectrum modelling, a DRR algorithm with source-spectrum and detector modelling, and a DRR algorithm with source-spectrum, detector and patient material modelling. Similarity between radiographs and matched DRRs was quantified using Pearson's correlation and Czekanowski's index, calculated on a per-image basis. Distributions of correlations and indexes were compared to test each of the hypotheses. Distribution differences were determined to be statistically significant when Wilcoxon's signed rank test and the Kolmogorov-Smirnov two-sample test returned p ≤ 0.05 for both tests. Main results. Intra-fraction radiographs were more similar to CBCT-DRRs than CT-DRRs for both metrics across all algorithms, with all p ≤ 0.007. Source-spectrum modelling improved radiograph-DRR similarity for both metrics, with all p < 10⁻⁶. OBI detector modelling and patient material modelling did not influence radiograph-DRR similarity for either metric. Significance. Generating DRRs from pre-treatment CBCTs is feasible, and incorporating CBCT-DRRs into markerless target-tracking methods may improve target-tracking accuracy. Incorporating source-spectrum modelling into a treatment planning system's DRR algorithms may reinforce the safe treatment of cancer patients by aiding in patient alignment.
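As a hedged illustration of the per-image similarity metrics and the two-test significance criterion described above, here is a minimal NumPy/SciPy sketch. It assumes geometrically matched radiograph/DRR pairs with non-negative intensities on a common scale; all function names are ours, not the paper's.

```python
import numpy as np
from scipy import stats

def pearson_similarity(radiograph: np.ndarray, drr: np.ndarray) -> float:
    """Pearson correlation between flattened pixel intensities."""
    return stats.pearsonr(radiograph.ravel(), drr.ravel())[0]

def czekanowski_index(radiograph: np.ndarray, drr: np.ndarray) -> float:
    """Czekanowski's index on non-negative intensities:
    2 * sum(elementwise min) / sum(a + b)."""
    a, b = radiograph.ravel(), drr.ravel()
    return 2.0 * np.minimum(a, b).sum() / (a.sum() + b.sum())

def significantly_different(sim_a, sim_b, alpha=0.05) -> bool:
    """Per the abstract, a distribution difference counts as significant
    only if both the Wilcoxon signed-rank test and the two-sample
    Kolmogorov-Smirnov test return p <= alpha."""
    p_wilcoxon = stats.wilcoxon(sim_a, sim_b).pvalue
    p_ks = stats.ks_2samp(sim_a, sim_b).pvalue
    return p_wilcoxon <= alpha and p_ks <= alpha
```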


Subjects
Algorithms; Cone-Beam Computed Tomography; Pancreatic Neoplasms; Radiosurgery; Humans; Cone-Beam Computed Tomography/methods; Radiosurgery/methods; Pancreatic Neoplasms/radiotherapy; Pancreatic Neoplasms/diagnostic imaging; Image Processing, Computer-Assisted/methods; Radiotherapy Planning, Computer-Assisted/methods; Deep Learning; Tomography, X-Ray Computed/methods; Pancreas/diagnostic imaging; Pancreas/surgery; Phantoms, Imaging
2.
Med Phys; 50(7): 4206-4219, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37029643

ABSTRACT

BACKGROUND: Using radiation therapy (RT) to treat head and neck (H&N) cancers requires precise targeting of the tumor to avoid damaging the surrounding healthy organs. Immobilisation masks and planning target volume margins are used to attempt to mitigate patient motion during treatment; however, patient motion can still occur. Patient motion during RT can lead to decreased treatment effectiveness and a higher chance of treatment-related side effects. Tracking tumor motion would enable motion compensation during RT, leading to more accurate dose delivery. PURPOSE: The purpose of this paper is to develop a method to detect and segment the tumor in kV images acquired during RT. Unlike previous tumor segmentation methods for kV images, a process for generating realistic synthetic CT deformations was developed to augment the training data and make the segmentation method robust to patient motion. Detecting the tumor in 2D kV images is a necessary step toward 3D tracking of the tumor position during treatment. METHODS: In this paper, a conditional generative adversarial network (cGAN) is presented that can detect and segment the gross tumor volume (GTV) in kV images acquired during H&N RT. Retrospective data from 15 H&N cancer patients obtained from the Cancer Imaging Archive were used to train and test patient-specific cGANs. The training data consisted of digitally reconstructed radiographs (DRRs) generated from each patient's planning CT and contoured GTV. The training data were augmented using synthetically deformed CTs to generate additional DRRs (in total 39 600 DRRs per patient, or 25 200 DRRs for nasopharyngeal patients) containing realistic patient motion. The method for deforming the CTs was a novel deformation method based on simulating head rotation and internal tumor motion. The testing dataset consisted of 1080 DRRs for each patient, obtained by deforming the planning CT and GTV at magnitudes different from those used for the training data. The accuracy of the generated segmentations was evaluated by measuring the segmentation centroid error, Dice similarity coefficient (DSC) and mean surface distance (MSD). This paper evaluated the hypothesis that, when patient motion occurs, using a cGAN to segment the GTV would create a more accurate segmentation than the no-tracking segmentation from the original contoured GTV, the current standard of care. This hypothesis was tested using the one-tailed Mann-Whitney U-test. RESULTS: The magnitude of our cGAN segmentation centroid error was (mean ± standard deviation) 1.1 ± 0.8 mm, and the DSC and MSD values were 0.90 ± 0.03 and 1.6 ± 0.5 mm, respectively. Our cGAN segmentation method reduced the segmentation centroid error (p < 0.001) and MSD (p = 0.031) compared to the no-tracking segmentation, but did not significantly increase the DSC (p = 0.294). CONCLUSIONS: The accuracy of our cGAN segmentation method demonstrates the feasibility of this method for H&N cancer patients during RT. Accurate segmentation of H&N tumors would allow intrafraction monitoring methods to compensate for tumor motion during treatment, ensuring more accurate dose delivery and enabling better outcomes for H&N cancer patients.
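For illustration, a minimal NumPy/SciPy sketch of the three evaluation metrics named above (centroid error, DSC and MSD) for 2D binary masks. The definitions follow common usage; in particular, the pooled symmetric MSD is one conventional choice and not necessarily the paper's exact implementation.

```python
import numpy as np
from scipy import ndimage

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks."""
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

def centroid_error(pred, gt, spacing=(1.0, 1.0)) -> float:
    """Euclidean distance between mask centroids, scaled to mm by pixel spacing."""
    c_pred = np.array(ndimage.center_of_mass(pred))
    c_gt = np.array(ndimage.center_of_mass(gt))
    return float(np.linalg.norm((c_pred - c_gt) * np.asarray(spacing)))

def _surface(mask):
    """Boundary pixels: the mask minus its binary erosion."""
    return np.logical_xor(mask, ndimage.binary_erosion(mask))

def mean_surface_distance(pred, gt, spacing=(1.0, 1.0)) -> float:
    """Symmetric MSD: mean over pooled pred-surface->gt-surface and
    gt-surface->pred-surface distances."""
    dist_to_gt = ndimage.distance_transform_edt(~_surface(gt), sampling=spacing)
    dist_to_pred = ndimage.distance_transform_edt(~_surface(pred), sampling=spacing)
    return float(np.concatenate([dist_to_gt[_surface(pred)],
                                 dist_to_pred[_surface(gt)]]).mean())
```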


Subjects
Deep Learning; Head and Neck Neoplasms; Humans; Retrospective Studies; Head and Neck Neoplasms/diagnostic imaging; Head and Neck Neoplasms/radiotherapy; Radiography; Tomography, X-Ray Computed; Image Processing, Computer-Assisted/methods
3.
Phys Med Biol; 68(9), 2023 Apr 26.
Article in English | MEDLINE | ID: mdl-36963116

ABSTRACT

Objective. Using MV images for real-time image-guided radiation therapy (IGRT) is ideal as it does not require additional imaging equipment, adds no additional imaging dose and provides motion data in the treatment beam frame of reference. However, accurate tracking using MV images is challenging due to low contrast and modulated fields. Here, a novel real-time marker tracking system based on a convolutional neural network (CNN) classifier was developed and evaluated on retrospectively acquired patient data for MV-based IGRT for prostate cancer patients. Approach. MV images, acquired from 29 volumetric modulated arc therapy (VMAT) prostate cancer patients treated in a multi-institutional clinical trial, were used to train and evaluate a CNN-based marker tracking system. The CNN was trained using labelled MV images from 9 prostate cancer patients (35 fractions) with implanted markers. CNN performance was evaluated on an independent cohort of unseen MV images from 20 patients (78 fractions), using a Precision-Recall curve (PRC), the area under the PRC (AUC), and sensitivity and specificity. The accuracy of the tracking system was evaluated on the same unseen dataset and quantified by calculating the mean absolute error (±1 SD) and the [1st, 99th] percentiles of the geometric tracking error in treatment beam coordinates, using manual identification as the ground truth. Main results. The CNN had an AUC of 0.99, a sensitivity of 98.31% and a specificity of 99.87%. The mean absolute geometric tracking error was 0.30 ± 0.27 and 0.35 ± 0.31 mm in the lateral and superior-inferior (SI) directions of the MV images, respectively. The [1st, 99th] percentiles of the error were [-1.03, 0.90] and [-1.12, 1.12] mm in the lateral and SI directions, respectively. Significance. The high classification performance on unseen MV images demonstrates that the CNN can successfully identify implanted prostate markers. Furthermore, the sub-millimetre accuracy and precision of the marker tracking system demonstrate its potential for adaptation to real-time applications.
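A minimal sketch of the classifier evaluation described above (PRC, AUC, sensitivity, specificity), assuming per-window ground-truth labels and classifier scores are available; this uses standard scikit-learn utilities with our own naming, not the paper's code.

```python
import numpy as np
from sklearn.metrics import auc, confusion_matrix, precision_recall_curve

def classifier_metrics(y_true, y_score, threshold=0.5):
    """Area under the Precision-Recall curve, plus sensitivity and
    specificity at a fixed decision threshold (assumed value)."""
    precision, recall, _ = precision_recall_curve(y_true, y_score)
    prc_auc = auc(recall, precision)  # integrate precision over recall
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return prc_auc, sensitivity, specificity
```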


Subjects
Deep Learning; Prostatic Neoplasms; Radiotherapy, Image-Guided; Humans; Male; Neural Networks, Computer; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/radiotherapy; Radiotherapy, Image-Guided/methods; Retrospective Studies
4.
Biomed Phys Eng Express; 9(3), 2023 Mar 7.
Article in English | MEDLINE | ID: mdl-36689758

ABSTRACT

Real-time target position verification during pancreas stereotactic body radiation therapy (SBRT) is important for the detection of unplanned tumour motion. Fast and accurate fiducial marker segmentation is a requirement of real-time marker-based verification. Deep learning (DL) segmentation techniques are ideal because they do not require additional learning imaging or prior marker information (e.g., shape, orientation). In this study, we evaluated three DL frameworks for marker tracking applied to pancreatic cancer patient data. The DL frameworks evaluated were (1) a convolutional neural network (CNN) classifier with a sliding window, (2) a pretrained you-only-look-once (YOLO) version-4 architecture, and (3) a hybrid CNN-YOLO. Intrafraction kV images collected during pancreas SBRT treatments were used as training data (44 fractions, 2017 frames). All patients had 1-4 implanted fiducial markers. Each model was evaluated on unseen kV images (42 fractions, 2517 frames). The ground truth was calculated from manual segmentation and triangulation of markers in orthogonal paired kV/MV images. The sensitivity, specificity, and area under the precision-recall curve (AUC) were calculated. In addition, the mean absolute error (MAE), root-mean-square error (RMSE) and standard error of the mean (SEM) were calculated for the centroids of the markers predicted by the models, relative to the ground truth. The sensitivity and specificity of the CNN model were 99.41% and 99.69%, respectively. The AUC was 0.9998. The average precision of the YOLO model across different values of recall was 96.49%. The MAEs of the three models in the left-right, superior-inferior, and anterior-posterior directions were under 0.88 ± 0.11 mm, and the RMSEs were under 1.09 ± 0.12 mm. The detection times per frame on a GPU were 48.3, 22.9, and 17.1 milliseconds for the CNN, YOLO, and CNN-YOLO, respectively. The results demonstrate submillimetre accuracy of the marker positions predicted by the DL models compared to the ground truth. The marker detection time was fast enough to meet the requirements for real-time application.
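For illustration, a small NumPy sketch of the reported accuracy statistics (MAE, RMSE, SEM) for predicted marker centroids against the triangulated ground truth; the array shapes and axis order are our assumptions.

```python
import numpy as np

def tracking_error_stats(pred_mm: np.ndarray, gt_mm: np.ndarray):
    """Per-axis MAE, RMSE and SEM of centroid predictions.
    pred_mm, gt_mm: (n_frames, 3) arrays, assumed LR/SI/AP order in mm."""
    err = np.asarray(pred_mm) - np.asarray(gt_mm)
    mae = np.abs(err).mean(axis=0)
    rmse = np.sqrt((err ** 2).mean(axis=0))
    sem = err.std(axis=0, ddof=1) / np.sqrt(err.shape[0])
    return mae, rmse, sem
```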


Subjects
Deep Learning; Pancreatic Neoplasms; Humans; Fiducial Markers; Motion; Pancreatic Neoplasms/diagnostic imaging; Pancreatic Neoplasms/radiotherapy
5.
J Med Imaging Radiat Oncol; 65(5): 596-611, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34288501

ABSTRACT

During radiotherapy, the organs and tumour move as a result of the dynamic nature of the body; this is known as intrafraction motion. Intrafraction motion can result in tumour underdose and healthy tissue overdose, reducing the effectiveness of the treatment while increasing toxicity to patients. There is a growing appreciation of intrafraction target motion management within the radiation oncology community. Real-time image-guided radiation therapy (IGRT) can track the target and account for the motion, improving the radiation dose to the tumour and reducing the dose to healthy tissue. Recently, artificial intelligence (AI)-based approaches have been applied to motion management and have shown great potential. In this review, four main categories of motion management using AI are summarised: marker-based tracking, markerless tracking, full-anatomy monitoring and motion prediction. Marker-based and markerless tracking approaches focus on tracking the individual target throughout treatment. Full-anatomy algorithms monitor for intrafraction changes across the full anatomy within the field of view. Motion prediction algorithms can be used to account for the latency introduced by the time the system needs to localise, process and act.
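To make the latency point concrete, here is a toy sketch of one very simple motion-prediction scheme (least-squares linear extrapolation over a short history). Real systems use more sophisticated predictors, and the sampling rate and latency values below are purely illustrative assumptions.

```python
import numpy as np

def predict_ahead(t_history, x_history, latency_s):
    """Extrapolate a 1D target trajectory `latency_s` seconds ahead to
    compensate the system's localise-process-act delay."""
    slope, intercept = np.polyfit(t_history, x_history, deg=1)
    return slope * (t_history[-1] + latency_s) + intercept

# Illustrative use: respiratory-like motion sampled at 25 Hz,
# predicting 0.2 s ahead (all numbers hypothetical).
t = np.arange(0.0, 1.0, 0.04)                 # timestamps in seconds
x = 5.0 * np.sin(2 * np.pi * 0.25 * t)        # target position in mm
print(predict_ahead(t[-10:], x[-10:], latency_s=0.2))
```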


Subjects
Motion; Radiation Oncology; Artificial Intelligence; Humans; Radiotherapy Planning, Computer-Assisted; Radiotherapy, Image-Guided; Radiotherapy, Intensity-Modulated
6.
Med Phys; 46(5): 2286-2297, 2019 May.
Article in English | MEDLINE | ID: mdl-30929254

ABSTRACT

PURPOSE: Real-time image-guided adaptive radiation therapy (IGART) requires accurate marker segmentation to resolve three-dimensional (3D) motion based on two-dimensional (2D) fluoroscopic images. Most common marker segmentation methods require prior knowledge of marker properties to construct a template. If marker properties are not known, an additional learning period is required to build the template, which exposes the patient to an additional imaging dose. This work investigates a deep learning-based fiducial marker classifier for use in real-time IGART that requires no prior patient-specific data or additional learning periods. The proposed tracking system uses convolutional neural network (CNN) models to segment cylindrical and arbitrarily shaped fiducial markers. METHODS: The tracking system uses a tracking window approach to perform sliding window classification of each implanted marker. Three cylindrical marker training datasets were generated from phantom kilovoltage (kV) and patient intrafraction images with increasing levels of megavoltage (MV) scatter. The cylindrical shaped marker CNNs were validated on unseen kV fluoroscopic images from 12 fractions of 10 prostate cancer patients with implanted gold fiducials. For the training and validation of the arbitrarily shaped marker CNNs, cone beam computed tomography (CBCT) projection images from ten fractions of seven lung cancer patients with implanted coiled markers were used. The arbitrarily shaped marker CNNs were trained using three patients, and the other four unseen patients were used for validation. The effects of full training using a compact CNN (four layers with learnable weights) and transfer learning using a pretrained CNN (AlexNet, eight layers with learnable weights) were analyzed. Each CNN was evaluated using a Precision-Recall curve (PRC), the area under the PRC plot (AUC), and by the calculation of sensitivity and specificity. The tracking system was assessed using the validation data, and the accuracy was quantified by calculating the mean error, root-mean-square error (RMSE) and the 1st and 99th percentiles of the error. RESULTS: The fully trained CNN on the dataset with moderate noise levels had a sensitivity of 99.00% and specificity of 98.92%. Transfer learning of AlexNet resulted in a sensitivity and specificity of 99.42% and 98.13%, respectively, for the same datasets. For the arbitrarily shaped marker CNNs, the sensitivity was 98.58% and specificity was 98.97% for the fully trained CNN. The transfer learning CNN had a sensitivity and specificity of 98.49% and 99.56%, respectively. The CNNs were successfully incorporated into a multiple object tracking system for both cylindrical and arbitrarily shaped markers. The cylindrical shaped marker tracking had a mean RMSE of 1.6 ± 0.2 pixels and 1.3 ± 0.4 pixels in the x- and y-directions, respectively. The arbitrarily shaped marker tracking had a mean RMSE of 3.0 ± 0.5 pixels and 2.2 ± 0.4 pixels in the x- and y-directions, respectively. CONCLUSION: With deep learning CNNs, high classification performances on unseen patient images were achieved for both cylindrical and arbitrarily shaped markers. Furthermore, the application of CNN models to intrafraction monitoring was demonstrated using a simple tracking system. The results demonstrate that CNN models can be used to track markers without prior knowledge of the marker properties or an additional learning period.
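As a hedged sketch of the transfer-learning setup described above, here is how a pretrained AlexNet can be adapted to a two-class marker-vs-background window classifier in PyTorch/torchvision. The two-class head and the frozen-feature policy are our assumptions, not details confirmed by the paper.

```python
import torch.nn as nn
from torchvision import models

# Load AlexNet pretrained on ImageNet (eight layers with learnable weights)
# and swap its 1000-class head for a binary marker-vs-background output.
model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
model.classifier[6] = nn.Linear(4096, 2)

# Assumption: freeze the convolutional feature extractor so that only the
# fully connected layers are fine-tuned on the limited marker data.
for param in model.features.parameters():
    param.requires_grad = False
```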


Subjects
Deep Learning; Dose Fractionation, Radiation; Fiducial Markers; Fluoroscopy/standards; Radiotherapy, Image-Guided; Automation; Humans; Male; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/radiotherapy